
    Secret-key rates and privacy leakage in biometric systems

    In this thesis both the generation of secret keys from biometric data and the binding of secret keys to biometric data are investigated. These secret keys can be used to regulate access to sensitive data, services, and environments. In a biometric secrecy system a secret key is generated or chosen during an enrollment procedure in which biometric data are observed for the first time. This key is to be reconstructed after these biometric data are observed for the second time, when authentication is required. Since biometric measurements are typically noisy, reliable biometric secrecy systems also extract so-called helper data from the biometric observation at the time of enrollment. These helper data facilitate reliable reconstruction of the secret key in the authentication process. Since the helper data are assumed to be public, they should not contain information about the secret key; we say that the secrecy leakage should be negligible. Important parameters of biometric key-generation and key-binding systems include the size of the generated or chosen secret key and the information that the helper data contain (leak) about the biometric observation. This latter parameter is called privacy leakage. Ideally the privacy leakage should be small, to prevent the biometric data of an individual from being compromised. Moreover, the secret-key length (also characterized by the secret-key rate) should be large, to minimize the probability that the secret key is guessed and unauthorized access is granted.

    The first part of this thesis mainly focuses on the fundamental trade-off between the secret-key rate and the privacy-leakage rate in biometric secret-generation and secret-binding systems. This trade-off is studied from an information-theoretic perspective for four biometric settings. The first setting is the classical secret-generation setting as proposed by Maurer [1993] and Ahlswede and Csiszár [1993]. For this setting the achievable secret-key vs. privacy-leakage rate region is determined in this thesis. In the second setting the secret key is not generated by the terminals, but independently chosen during enrollment (key binding). Also for this setting the region of achievable secret-key vs. privacy-leakage rate pairs is determined. In settings three and four zero-leakage systems are considered. In these systems the public message should contain only a negligible amount of information about both the secret key and the biometric enrollment sequence. To achieve this, a private key is needed, which can be observed only by the two terminals. Again both the secret-generation setting and the chosen-secret setting are considered. For these two cases the regions of achievable secret-key vs. private-key rate pairs are determined. For all four settings two notions of leakage are considered. Depending on whether one looks at secrecy and privacy leakage separately or in combination, unconditional or conditional privacy leakage is considered. Here unconditional leakage corresponds to the mutual information between the helper data and the biometric enrollment sequence, while conditional leakage relates to the conditional version of this mutual information, given the secret.
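    With illustrative notation (ours, chosen only for this summary) X^N for the biometric enrollment sequence, W for the public helper data, S for the secret key, and N for the sequence length, these leakage notions can be written as normalized mutual informations:

        % Notation chosen for illustration; the thesis may use different symbols.
        \begin{align*}
          \text{secrecy leakage:}               &\quad \tfrac{1}{N} I(S; W) \approx 0,\\
          \text{unconditional privacy leakage:} &\quad \tfrac{1}{N} I(X^N; W),\\
          \text{conditional privacy leakage:}   &\quad \tfrac{1}{N} I(X^N; W \mid S).
        \end{align*}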
    The second part of the thesis focuses on the privacy- and secrecy-leakage analysis of the fuzzy commitment scheme. Fuzzy commitment, proposed by Juels and Wattenberg [1999], is in fact a particular realization of a binary biometric secrecy system with a chosen secret key. In this scheme the helper data are constructed as a codeword from an error-correcting code, used to encode a chosen secret, masked with the biometric sequence that has been observed during enrollment. Since this scheme is not privacy preserving in the conditional privacy-leakage sense, the unconditional privacy-leakage case is investigated. Four cases of biometric sources are considered: memoryless and totally-symmetric biometric sources, memoryless and input-symmetric biometric sources, memoryless biometric sources, and stationary and ergodic biometric sources. For the first two cases the achievable rate-leakage regions are determined. In these cases the secrecy leakage rate need not be positive. For the other two cases only outer bounds on the achievable rate-leakage regions are found. These bounds are moreover sharpened for fuzzy commitment based on systematic parity-check codes. Using the fundamental trade-offs found in the first part of this thesis, it is shown that fuzzy commitment is only optimal for memoryless totally-symmetric biometric sources, and only at the maximum secret-key rate. Moreover, it is demonstrated that for memoryless and stationary ergodic biometric sources that are not input-symmetric, the fuzzy commitment scheme leaks information on both the secret key and the biometric data.

    Biometric sequences have an often unknown statistical structure (model) that can be quite complex. The last part of this dissertation addresses the problem of finding the maximum a posteriori (MAP) model for a pair of observed biometric sequences and the problem of estimating the maximum secret-key rate from these sequences. A universal source coding procedure called the Context-Tree Weighting (CTW) method [1995] can be used to find this MAP model. In this thesis a procedure that determines the MAP model, based on the so-called beta-implementation of the CTW method, is proposed. Moreover, CTW methods are used to compress the biometric sequences and sequence pairs in order to estimate the mutual information between the sequences. However, CTW methods were primarily developed for compressing one-dimensional sources, while biometric data are often modeled as two-dimensional processes. Therefore it is proved here that the entropy of a stationary two-dimensional source can be expressed as a limit of a series of conditional entropies. This result is also extended to the conditional entropy of one two-dimensional source given another one. As a consequence, entropy and mutual-information estimates can be obtained from CTW methods using properly chosen templates. Using such techniques, estimates of the maximum secret-key rate for physical unclonable functions (PUFs) are determined from a data set of observed sequences. PUFs can be regarded as inanimate analogues of biometrics.
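    Since the maximum secret-key rate is estimated via the mutual information between sequence pairs, a rough sketch of the compression-based estimation idea is given below. It uses zlib purely as a stand-in universal compressor (the thesis uses CTW-based compression), and all function names and the toy data are our own assumptions.

        # Rough sketch (ours): estimate the mutual information between an
        # enrollment sequence X and an authentication sequence Y from
        # compression lengths, via I(X;Y) ~ H(X) + H(Y) - H(X,Y).
        # zlib is only a stand-in; the thesis uses CTW-based compression.
        import zlib

        def code_length_bits(symbols: bytes) -> float:
            """Approximate description length of a sequence, in bits."""
            return 8.0 * len(zlib.compress(symbols, 9))

        def entropy_rate(seq: bytes) -> float:
            """Entropy-rate estimate in bits per symbol."""
            return code_length_bits(seq) / len(seq)

        def mutual_information_estimate(x: bytes, y: bytes) -> float:
            """I(X;Y) estimate; the joint sequence interleaves symbol pairs."""
            assert len(x) == len(y)
            joint = bytes(b for pair in zip(x, y) for b in pair)
            h_joint = code_length_bits(joint) / len(x)  # bits per symbol pair
            return max(0.0, entropy_rate(x) + entropy_rate(y) - h_joint)

        # Toy usage with synthetic (hypothetical) binary PUF-like observations:
        if __name__ == "__main__":
            import random
            random.seed(1)
            enrol = bytes(random.getrandbits(1) for _ in range(4000))
            auth = bytes(b ^ (random.random() < 0.1) for b in enrol)  # ~10% bit flips
            print(f"estimated I(X;Y): {mutual_information_estimate(enrol, auth):.3f} bits/symbol")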

    DNA sequence modeling based on context trees

    Genomic sequences contain instructions for protein and cell production. Therefore, understanding and identifying biologically and functionally meaningful patterns in DNA sequences is of paramount importance. Modeling of DNA sequences can, in turn, help to better understand and identify such patterns and the dependencies between them. It is well known that genomic data contain various regions with distinct functionality and thus also distinct statistical properties. In this work we focus on modeling such individual regions of distinct functionality. We apply the concept of context trees to model these DNA regions. Based on the Minimum Description Length principle, we use the estimated compression rate of a genomic region, given such models, as a similarity measure. We show that the constructed model can be used to distinguish specific genes within DNA sequences.
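    As a much-simplified illustration of the compression-rate similarity measure, the sketch below trains a fixed-depth context model with Laplace smoothing and scores a query region by its resulting code length in bits per base. This is only a stand-in for the adaptively weighted context trees used in the paper; the depth, the smoothing, and all names are our own choices.

        # Simplified sketch (ours) of MDL-style, compression-based similarity.
        from collections import defaultdict
        from math import log2

        ALPHABET = "ACGT"
        DEPTH = 4  # context length in bases (illustrative choice)

        def train_counts(reference: str, depth: int = DEPTH):
            """Count symbol occurrences for each length-`depth` context."""
            counts = defaultdict(lambda: {s: 0 for s in ALPHABET})
            for i in range(depth, len(reference)):
                counts[reference[i - depth:i]][reference[i]] += 1
            return counts

        def code_length_bits_per_base(query: str, counts, depth: int = DEPTH) -> float:
            """Ideal code length of `query` under the trained model (Laplace smoothing)."""
            bits, n = 0.0, 0
            for i in range(depth, len(query)):
                ctx, sym = query[i - depth:i], query[i]
                c = counts[ctx]
                total = sum(c.values())
                bits += -log2((c[sym] + 1) / (total + len(ALPHABET)))
                n += 1
            return bits / max(n, 1)

        # Usage sketch: classify a region by the model that compresses it best.
        # models = {name: train_counts(region) for name, region in reference_regions.items()}
        # best = min(models, key=lambda name: code_length_bits_per_base(query, models[name]))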

    Privacy leakage in biometric secrecy systems

    Motivated by Maurer [1993], Ahlswede and Csiszár [1993] introduced the concept of secret sharing. In their source model two terminals observe two correlated sequences. It is the objective of both terminals to form a common secret by interchanging a public message (helper data) that should contain only a negligible amount of information about the secret. Ahlswede and Csiszár showed that the maximum secret-key rate that can be achieved in this way is equal to the mutual information between the two source outputs. In a biometric setting, where the sequences correspond to the enrollment and authentication data, it is crucial that the public message leaks as little information as possible about the biometric data, since compromised biometric data cannot be replaced. We investigate the fundamental trade-offs for four biometric settings. The first one is the standard (Ahlswede-Csiszár) secret-generation setting, for which we determine the secret-key rate vs. privacy-leakage rate region. Here leakage corresponds to the mutual information between the helper data and the biometric enrollment sequence, conditional on the secret. In the second setting the secret is not generated by the terminals but independently chosen, and transmitted using a public message. Again we determine the region of achievable rate-leakage pairs. In settings three and four we consider zero leakage, i.e. the public message contains only a negligible amount of information about the secret and the biometric enrollment sequence. To achieve this, a private key is needed, which can be observed only by the terminals. We again consider both secret generation and secret transmission and determine for both cases the region of achievable secret-key rate vs. private-key rate pairs.
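    With illustrative notation X and Y for the enrollment and authentication observations, the Ahlswede-Csiszár result quoted above can be written compactly; the binary-symmetric example is our own added assumption, included only to make the quantity concrete:

        % Maximum achievable secret-key rate (Ahlswede-Csiszár):
        \[
          C_s = I(X;Y).
        \]
        % Assumed example: X uniform on {0,1}, Y = X \oplus E with Pr(E=1) = p:
        \[
          C_s = 1 - h(p), \qquad h(p) = -p\log_2 p - (1-p)\log_2(1-p).
        \]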

    On the security of XOR-method in biometric authentication systems

    A biometric authentication system can be partitioned into a layer that extracts common randomness out of a pair of related biometric sequences [Maurer, 1993] and a layer that masks a key sequence with this common randomness. We first analyze the performance of such a layered system and then show that an alternative method, the XOR-technique, is not always secure.
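    The sketch below is a toy illustration of the masking layer described above; it is our own, not the paper's construction or its security analysis, and all names are assumptions.

        # Toy sketch (ours) of the key-masking layer: a key sequence is
        # XOR-masked with a binary sequence derived from the biometric data,
        # and unmasked again at authentication time.
        from typing import List

        Bits = List[int]

        def xor_mask(key: Bits, mask: Bits) -> Bits:
            """Enrollment: publish key XOR mask as helper data."""
            assert len(key) == len(mask)
            return [k ^ m for k, m in zip(key, mask)]

        def xor_unmask(helper: Bits, mask_estimate: Bits) -> Bits:
            """Authentication: recover the key from the helper data and a
            (noisy) re-derived mask; every mask bit error flips a key bit."""
            return [h ^ m for h, m in zip(helper, mask_estimate)]

        # Generic caveat (our illustration of why XOR masking alone can fail):
        # if the mask bits are biased, say Pr[m = 1] = 0.9, then each published
        # helper bit h = k ^ m equals NOT(k) with probability 0.9, so an
        # observer guessing k = NOT(h) is right 90% of the time.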

    Hierarchical attribute-based encryption and decryption

    A domain authority 13 for use in a hierarchy of domain authorities in a hierarchical cryptographic system. The domain authority 13 comprises a user secret key generator 21 for generating a user secret key based on a domain secret key and one or more attribute representations, to obtain a user secret key associated with a set of attributes corresponding to the attribute representations, and wherein the domain secret key is based on a domain secret key of a parent domain authority or a root secret key of a root authority of the hierarchy of domain authorities, and wherein the attribute representations are independent of the hierarchy. A decrypter makes use of the user secret key. An encrypter generates ciphertext decryptable by the decrypter.
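    The following structural sketch is entirely our own: an HMAC-based placeholder derivation stands in for the patent's attribute-based encryption construction, and all class and function names are assumptions. It only illustrates the hierarchy described above, with domain secret keys derived from a parent or root key and user secret keys tied to hierarchy-independent attribute sets.

        # Structural sketch (ours); the key derivation is a placeholder,
        # NOT the patent's attribute-based encryption scheme.
        import hmac, hashlib, os
        from typing import Iterable

        def _derive(parent_key: bytes, label: bytes) -> bytes:
            return hmac.new(parent_key, label, hashlib.sha256).digest()

        class RootAuthority:
            def __init__(self) -> None:
                self.root_secret_key = os.urandom(32)

            def create_domain(self, name: str) -> "DomainAuthority":
                return DomainAuthority(_derive(self.root_secret_key, name.encode()))

        class DomainAuthority:
            def __init__(self, domain_secret_key: bytes) -> None:
                self.domain_secret_key = domain_secret_key

            def create_subdomain(self, name: str) -> "DomainAuthority":
                # Child domain key is derived from this domain's key.
                return DomainAuthority(_derive(self.domain_secret_key, name.encode()))

            def user_secret_key(self, user: str, attributes: Iterable[str]) -> bytes:
                # Attribute representations are independent of the hierarchy:
                # they enter the derivation only here, at user-key generation.
                label = user.encode() + b"|" + ",".join(sorted(attributes)).encode()
                return _derive(self.domain_secret_key, label)

        # Usage sketch:
        # root = RootAuthority()
        # emea = root.create_domain("sales").create_subdomain("emea")
        # usk = emea.user_secret_key("alice", {"manager", "finance"})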

    Secure key storage with PUFs

    Nowadays, people carry around devices (cell phones, PDAs, bank passes, etc.) that have a high value. That value is often contained in the data stored on the device or lies in the services the device can grant access to (by using secret identification information stored in it). These devices often operate in hostile environments and their protection level is not adequate to deal with that situation. Bank passes and credit cards contain a magnetic stripe where identification information is stored. In the case of bank passes, a PIN is additionally required to withdraw money from an ATM (Automated Teller Machine). On various occasions it has been shown that, by placing a small coil in the reader, the magnetic information stored in the stripe can easily be copied and used to produce a cloned card. Together with eavesdropping on the PIN (by listening to the keypad or recording it with a camera), an attacker can easily impersonate the legitimate owner of the bank pass by using the cloned card in combination with the eavesdropped PIN.

    Secret rate - Privacy leakage in biometric systems

    Ahlswede and Csiszár [1993] introduced the concept of secret sharing. In their source model two terminals observe two correlated sequences. It is the objective of the terminals to form a common secret by interchanging a public message (helper data) in such a way that the secrecy leakage is negligible. In a biometric setting, where the sequences correspond to the enrollment and authentication data, respectively, it is crucial that the public message leaks as little information as possible about the biometric data, since compromised biometric data cannot be replaced. We investigated the fundamental trade-offs for four biometric settings. The first one is the standard (Ahlswede-Csiszár) secret-generation setting, for which we determined the secret-key vs. privacy-leakage rate region. Here leakage corresponds to the mutual information between the helper data and the biometric enrollment sequence. In the second setting the secret is not generated by the terminals but independently chosen, and transmitted using a public message. Again we determined the region of achievable rate-leakage pairs. In settings three and four we consider zero leakage, i.e. the public message contains only a negligible amount of information about the secret and about the biometric enrollment sequence. To achieve this, a private key is needed, which can be observed only by the terminals. We again considered both secret generation and secret transmission and determined for both cases the region of achievable secret-key vs. private-key rate pairs. © 2009 IEEE
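    In the zero-leakage settings, with illustrative notation (ours) W for the public message, S for the secret, X^N for the biometric enrollment sequence, and P for the private key shared by the two terminals, the requirements can be summarized as follows:

        % Notation ours, for illustration only.
        \[
          \tfrac{1}{N} I(S; W) \approx 0
          \qquad\text{and}\qquad
          \tfrac{1}{N} I(X^N; W) \approx 0,
        \]
        % with the characterized regions consisting of the achievable pairs
        % (R_s, R_p) of secret-key rate versus private-key rate.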

    Quantization effects in biometric systems

    The fundamental secret-key rate vs. privacy-leakage rate trade-offs for secret-key generation and transmission for i.i.d. Gaussian biometric sources are determined. These results are the Gaussian equivalents of the results obtained for the discrete case by the authors and, independently, by Lai et al. in 2008. The effect of binary quantization of the biometric sequences on the ratio of the secret-key rate to the privacy-leakage rate is also considered. It is shown that the squared correlation coefficient must be increased by a factor of π²/4 to compensate for such a quantization action, for values of the privacy-leakage rate that approach zero, when the correlation coefficient is close to zero.
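    One way to see where a π²/4 factor can arise (our own sanity check, assuming sign quantization of zero-mean, unit-variance jointly Gaussian enrollment and authentication variables with correlation coefficient ρ):

        % Arcsine law for sign-quantized jointly Gaussian variables:
        \[
          \mathbb{E}\!\left[\operatorname{sign}(X)\,\operatorname{sign}(Y)\right]
          = \frac{2}{\pi}\arcsin(\rho) \;\approx\; \frac{2}{\pi}\,\rho
          \quad (\rho \to 0),
        \]
        % so quantization scales the squared correlation by (2/\pi)^2 = 4/\pi^2;
        % compensating for it requires increasing \rho^2 by \pi^2/4, matching
        % the factor quoted above in the small-correlation regime.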